"Contradict my take on OpenPhil’s past AI beliefs" by Eliezer Yudkowsky
Updated: 2025-12-21
Description
At many points now, I've been asked in private for a critique of EA / EA's history / EA's impact and I have ad-libbed statements that I feel guilty about because they have not been subjected to EA critique and refutation. I need to write up my take and let you all try to shoot it down.
Before I can or should try to write up that take, I need to fact-check one of my take-central beliefs about how the last couple of decades have gone down. My belief is that the Open Philanthropy Project, EA generally, and Oxford EA particularly, had bad AI timelines and bad ASI ruin conditional probabilities; and that these invalidly arrived-at beliefs were in control of funding, and were explicitly publicly promoted at the expense of saner beliefs.
An exemplar of OpenPhil / Oxford EA reasoning about timelines is that, as late as 2020, their position on timelines seemed to center on Ajeya Cotra's "Biological Anchors" estimate, which put the median timeline to AGI roughly 30 years out. Leadership dissent from this viewpoint, as I recall, generally centered on having longer rather than shorter median timelines.
An exemplar of poor positioning on AI ruin is [...]
---
First published:
December 20th, 2025
Source:
https://www.lesswrong.com/posts/ZpguaocJ4y7E3ccuw/contradict-my-take-on-openphil-s-past-ai-beliefs
---
Narrated by TYPE III AUDIO.